This paper revisits datasets and evaluation criteria for symbolic regression (SR), the task of expressing given data using mathematical equations, with a particular focus on its potential for scientific discovery. Focusing on the set of formulas used in existing datasets based on the Feynman Lectures on Physics, we recreate 120 datasets to discuss the performance of symbolic regression for scientific discovery (SRSD). For each of the 120 SRSD datasets, we carefully review the properties of the formula and its variables to design reasonably realistic sampling ranges of values, so that our new SRSD datasets can be used to evaluate the potential of SRSD, i.e., whether an SR method can (re)discover physical laws from such datasets. As an evaluation metric, we also propose to use normalized edit distance between a predicted equation and the ground-truth equation tree. While existing metrics are either binary or measure errors between target values and an SR model's predictions, normalized edit distance evaluates the similarity between the ground-truth and predicted equation trees. We have conducted experiments on our new SRSD datasets using five state-of-the-art SR methods in SRBench, plus a simple baseline based on a recent Transformer architecture. The results show that we provide a more realistic performance evaluation and open up a new machine-learning-based approach to scientific discovery. Our datasets and code repository are publicly available.
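The normalized edit distance metric can be illustrated with a short sketch. This is our own simplified illustration, not the paper's exact implementation: each equation tree (here a nested tuple, a hypothetical representation) is serialized to a preorder token sequence, a standard Levenshtein distance is computed, and the result is normalized by the longer sequence's length so the score lies in [0, 1].

```python
# Sketch of a normalized edit distance between equation trees
# (a simplified illustration; the paper's exact serialization and
# normalization may differ).

def levenshtein(a, b):
    """Classic dynamic-programming edit distance between token sequences."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i
    for j in range(n + 1):
        dp[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,          # deletion
                           dp[i][j - 1] + 1,          # insertion
                           dp[i - 1][j - 1] + cost)   # substitution
    return dp[m][n]

def preorder(tree):
    """Flatten a nested (operator, child, ...) tuple into preorder tokens."""
    if isinstance(tree, tuple):
        op, *children = tree
        tokens = [op]
        for child in children:
            tokens.extend(preorder(child))
        return tokens
    return [tree]

def normalized_edit_distance(t1, t2):
    s1, s2 = preorder(t1), preorder(t2)
    return levenshtein(s1, s2) / max(len(s1), len(s2))

# Ground truth x*sin(y) vs. prediction x*cos(y): one substituted token
# out of four, so the normalized distance is 0.25.
gt = ("*", "x", ("sin", "y"))
pred = ("*", "x", ("cos", "y"))
print(normalized_edit_distance(gt, pred))  # → 0.25
```

Unlike a binary hit/miss or a numeric error on target values, this score degrades gracefully: structurally close predictions receive small distances.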
Recent work has identified noisy and misannotated data as a core cause of hallucinations and unfaithful outputs in Natural Language Generation (NLG) tasks. Consequently, identifying and removing these examples is a key open challenge in creating reliable NLG systems. In this work, we introduce a framework to identify and remove low-quality training instances that lead to undesirable outputs, such as faithfulness errors in text summarization. We show that existing approaches for error tracing, such as gradient-based influence measures, do not perform reliably for detecting faithfulness errors in summarization. We overcome the drawbacks of existing error tracing methods through a new, contrast-based estimate that compares undesired generations to human-corrected outputs. Our proposed method can achieve a mean average precision of 0.91 across synthetic tasks with known ground truth and can achieve a two-fold reduction in hallucinations on a real entity hallucination evaluation on the NYT dataset.
Task-oriented dialogue systems often assist users with personal or confidential matters. For this reason, the developers of such a system are generally prohibited from observing actual usage. So how can they know where the system is failing and needs more training data or new functionality? In this work, we study ways in which realistic user utterances can be generated synthetically, to help increase the linguistic and functional coverage of the system, without compromising the privacy of actual users. To this end, we propose a two-stage Differentially Private (DP) generation method which first generates latent semantic parses, and then generates utterances based on the parses. Our proposed approach improves MAUVE by 3.8$\times$ and parse tree node-type overlap by 1.4$\times$ relative to current approaches for private synthetic data generation, improving both on fluency and semantic coverage. We further validate our approach on a realistic domain adaptation task of adding new functionality from private user data to a semantic parser, and show gains of 1.3$\times$ on its accuracy with the new feature.
Sampling diverse programs from a code language model and reranking with model likelihood is a popular method for code generation but it is prone to preferring degenerate solutions. Inspired by collaborative programming, we propose Coder-Reviewer reranking. We augment Coder language models from past work, which generate programs given language instructions, with Reviewer models, which evaluate the likelihood of the instruction given the generated programs. We perform an extensive study across six datasets with eight models from three model families. Experimental results show that Coder-Reviewer reranking leads to consistent and significant improvement (up to 17% absolute accuracy gain) over reranking with the Coder model only. When combined with executability filtering, Coder-Reviewer reranking can often outperform the minimum Bayes risk method. Coder-Reviewer reranking is easy to implement by prompting, can generalize to different programming languages, and works well with off-the-shelf hyperparameters.
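The core reranking rule described above can be sketched in a few lines. This is a minimal illustration under stated assumptions: the log-probabilities below are hypothetical stand-ins for scores that would in practice be obtained by prompting the Coder and Reviewer language models.

```python
# Sketch of Coder-Reviewer reranking (illustrative only: the log-probs
# are invented stand-ins, not outputs of real models).

def coder_reviewer_rerank(candidates):
    """candidates: list of (program, log_p_coder, log_p_reviewer).

    log_p_coder    ~ log p(program | instruction)  from the Coder model
    log_p_reviewer ~ log p(instruction | program)  from the Reviewer model
    The combined reranking score is the sum of the two log-probabilities;
    the highest-scoring candidate program is returned.
    """
    return max(candidates, key=lambda c: c[1] + c[2])[0]

# Hypothetical scores for three sampled programs: the degenerate program
# is very likely under the Coder alone, but the Reviewer finds the
# instruction implausible given it, so it loses after combination.
samples = [
    ("return 0",        -1.0, -9.0),   # degenerate but high Coder score
    ("return sum(xs)",  -2.5, -1.5),   # plausible under both models
    ("return max(xs)",  -3.0, -4.0),   # solves the wrong task
]
print(coder_reviewer_rerank(samples))  # → "return sum(xs)"
```

Reranking by the Coder score alone would pick the degenerate `return 0`; adding the Reviewer term is what filters it out.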
Machine learning models are now able to convert user-written text descriptions into naturalistic images. These models are available to anyone online and are being used to generate millions of images a day. We investigate these models and find that they amplify dangerous and complex stereotypes. Moreover, we find that the amplified stereotypes are difficult to predict and not easily mitigated by users or model owners. The extent to which these image-generation models perpetuate and amplify stereotypes and their mass deployment is cause for serious concern.
Prompt learning has been shown to achieve near-fine-tuning performance on most text classification tasks with only a few training examples, which is advantageous for NLP tasks where samples are scarce. In this paper, we attempt to apply it to a practical scenario, namely resume information extraction, and enhance existing methods to make them better suited to this task. Specifically, we create multiple sets of manual templates and verbalizers based on the textual characteristics of resumes. In addition, we compare the performance of Masked Language Model (MLM) pre-trained language models (PLMs) and Seq2Seq PLMs on this task. Furthermore, we improve the verbalizer design method for knowledgeable prompt-tuning, providing an example for designing prompt templates and verbalizers for other application-oriented NLP tasks. To this end, we propose the concept of the Manual Knowledgeable Verbalizer (MKV), together with rules for constructing knowledgeable verbalizers that correspond to the application scenario. Experiments show that templates and verbalizers designed according to our rules are more effective and robust than existing manual templates and automatically generated prompt methods. It is established that currently available automatic prompting methods cannot compete with manually designed prompt templates in some realistic task scenarios. The results of the final confusion matrix show that our proposed MKV significantly alleviates the sample imbalance problem.
Despite the empirical success of self-supervised learning (SSL) methods, it remains unclear which characteristics of their representations lead to high downstream accuracy. In this work, we characterize the properties that SSL representations should satisfy. Specifically, we prove necessary and sufficient conditions such that, for any task compatible with the given data augmentations, a desired probe (e.g., linear or MLP) trained on the representation achieves perfect accuracy. These requirements lead to a unifying conceptual framework for improving existing SSL methods and deriving new ones. For contrastive learning, our framework prescribes simple but significant improvements over previous methods, such as using an asymmetric projection head. For non-contrastive learning, we use the framework to derive a simple and novel objective. The resulting SSL algorithms outperform baselines on standard benchmarks, including SwAV+multicrops on ImageNet linear probing.
Datasets scraped from the internet have been critical to the success of large-scale machine learning. Yet this very success puts the utility of future internet-derived datasets at potential risk, as model outputs begin to replace human annotations as a source of supervision. In this work, we first formalize a system in which interactions with one model are recorded as history and scraped as training data. We then analyze its stability over time by tracking changes to a test-time bias statistic (e.g., the gender bias of model predictions). We find that the degree of bias amplification is closely linked to whether the model's outputs behave like samples from the training distribution, a behavior we characterize and define as consistent calibration. Experiments in three conditional prediction scenarios - image classification, visual role-labeling, and language generation - demonstrate that models exhibiting sample-like behavior are more calibrated and thus more stable. Based on this insight, we propose an intervention to help calibrate and stabilize unstable feedback systems. Code is available at https://github.com/rtaori/data_feedback.
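The feedback dynamic can be caricatured in a toy simulation. This is our own deliberately simplified sketch, not the paper's experimental setup: a "model" estimates the rate of a binary attribute, its outputs re-enter the training pool each round, and we track the bias statistic. A model that samples from its estimated distribution (loosely, the sample-like behavior described above) stays near the original rate in expectation, while one that always emits the majority label amplifies the bias to saturation.

```python
# Toy simulation of a data feedback loop (an illustration only): each
# round, the model's outputs become the next round's training data.
import random

def feedback_loop(sample_like, rounds=30, per_round=1000, seed=0):
    rng = random.Random(seed)
    p = 0.6                      # initial bias statistic, e.g. P(label = 1)
    history = [p]
    for _ in range(rounds):
        if sample_like:
            # calibrated behavior: outputs are samples from the estimate p
            outputs = [1 if rng.random() < p else 0 for _ in range(per_round)]
        else:
            # miscalibrated behavior: always emit the current majority label
            outputs = [1 if p >= 0.5 else 0] * per_round
        p = sum(outputs) / per_round   # "retrain" on the scraped outputs
        history.append(p)
    return history

stable = feedback_loop(sample_like=True)
amplified = feedback_loop(sample_like=False)
print(stable[-1], amplified[-1])   # amplified saturates at 1.0
```

The miscalibrated variant jumps to 1.0 after a single round, while the sampling variant only performs an unbiased random walk around 0.6 - a cartoon of why sample-like, calibrated outputs make the feedback system more stable.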
Information extraction (IE) has long been one of the important tasks in NLP, and one of its most critical application scenarios is information extraction from resumes. Structured text is obtained by classifying each part of a resume, and these texts are convenient to store for later search and analysis. Moreover, structured resume data can also be used in AI resume-screening systems, greatly reducing the labor cost of human resources. This study aims to transform the resume information extraction task into a simple sentence classification task. Based on an English resume dataset produced by previous research, we refine the classification rules to create a larger and more fine-grained resume classification dataset. This corpus is also used to test the performance of several current mainstream pre-trained language models (PLMs). Furthermore, to explore the relationship between the number of training samples and accuracy on the resume dataset, we also conduct comparative experiments with training sets of different sizes. The final results of multiple experiments show that the resume dataset with improved annotation rules and increased sample size achieves higher accuracy than the original resume dataset.
The development of CLIP [Radford et al., 2021] has sparked a debate on whether language supervision can yield vision models with more transferable representations than traditional image-only methods. Our work studies this question through a carefully controlled comparison of the learning capabilities of the two approaches on downstream classification tasks. We find that when the pre-training dataset meets certain criteria - it is sufficiently large and contains descriptive captions with low variability - image-only methods do not match CLIP's transfer performance, even when they are trained with more image data. However, contrary to what one might expect, there are certain scenarios where these criteria are not met, in which the added supervision through captions is actually harmful. Motivated by our findings, we devise simple prescriptions to enable CLIP to better exploit the language information present in existing pre-training datasets.